
    An Intelligent Early Warning System for Software Quality Improvement and Project Management

    One of the main reasons behind unfruitful software development projects is that it is often too late to correct problems by the time they are detected, which clearly indicates the need for early warning about potential risks. In this paper, we discuss an intelligent software early warning system based on fuzzy logic that uses an integrated set of software metrics. It helps to assess, from multiple perspectives, the risks of falling behind schedule, running over budget, and delivering poor quality in software development and maintenance. Using fuzzy linguistic variables, fuzzy sets, and fuzzy inference rules, it handles incomplete, inaccurate, and imprecise information and resolves conflicts in an uncertain environment during software risk assessment. Process, product, and organizational metrics are collected or computed based on solid software models. The intelligent risk assessment process consists of the following steps: fuzzification of software metrics, rule firing, derivation and aggregation of the resulting risk fuzzy sets, and defuzzification of linguistic risk variables.
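    The four steps listed in the abstract can be sketched as a minimal Mamdani-style fuzzy inference pass. This is only an illustration, not the paper's actual system: the single metric (schedule slippage percentage), the triangular membership functions, the three rules, and all thresholds below are hypothetical stand-ins.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_risk(slippage_pct):
    # Step 1: fuzzification of the software metric into linguistic terms.
    low  = tri(slippage_pct, -1, 0, 20)
    med  = tri(slippage_pct, 10, 30, 50)
    high = tri(slippage_pct, 40, 70, 101)

    # Steps 2-3: rule firing and aggregation of the resulting risk fuzzy
    # sets. Rules: low slippage -> low risk, etc.; each consequent risk
    # set is summarized here by its centroid (10, 50, 90 on a 0-100 scale).
    fired = [(low, 10.0), (med, 50.0), (high, 90.0)]

    # Step 4: defuzzification by the centroid (weighted-average) method.
    total = sum(w for w, _ in fired)
    return sum(w * c for w, c in fired) / total if total else 0.0

print(assess_risk(35.0))  # 35% slippage fires only the "medium" rule -> 50.0
```

    A real system would aggregate many process, product, and organizational metrics through a shared rule base; the weighted-average defuzzifier is just one common choice.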

    INFOPHARE: Newsletter of the Phare Information Office. July 1995 Issue 8.

    We study the fundamental problem of learning the parameters of a high-dimensional Gaussian in the presence of noise, where an ε-fraction of our samples were chosen by an adversary. We give robust estimators that achieve estimation error O(ε) in the total variation distance, which is optimal up to a universal constant that is independent of the dimension. In the case where just the mean is unknown, our robustness guarantee is optimal up to a factor of √2 and the running time is polynomial in d and 1/ε. When both the mean and covariance are unknown, the running time is polynomial in d and quasipolynomial in 1/ε. Moreover, all of our algorithms require only a polynomial number of samples. Our work shows that the same sorts of error guarantees that were established over fifty years ago in the one-dimensional setting can also be achieved by efficient algorithms in high-dimensional settings.
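    The one-dimensional guarantee the abstract alludes to can be seen in a toy experiment: with an ε-fraction of adversarial samples, the median still estimates the mean of a Gaussian to O(ε) error, while the empirical mean can be dragged arbitrarily far. This sketch only illustrates that contrast; it is not the paper's high-dimensional algorithm.

```python
import random
import statistics

random.seed(0)
eps = 0.1
inliers  = [random.gauss(0.0, 1.0) for _ in range(900)]  # true mean is 0
outliers = [1000.0] * 100                                # adversarial eps-fraction
sample = inliers + outliers

naive  = statistics.mean(sample)    # pulled far from 0 by the outliers
robust = statistics.median(sample)  # stays within O(eps) of 0

print(abs(robust) < 1.0 < abs(naive))
```

    The hard part, which the paper addresses, is matching this kind of robustness in d dimensions with algorithms that run in polynomial time and lose no dimension-dependent factors.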

    Robust Estimators in High Dimensions without the Computational Intractability

    We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an epsilon fraction of the samples. Such questions have a rich history spanning statistics, machine learning, and theoretical computer science. Even in the most basic settings, the only known approaches are either computationally inefficient or lose dimension-dependent factors in their error guarantees. This raises the following question: Is high-dimensional agnostic distribution learning even possible, algorithmically? In this work, we obtain the first computationally efficient algorithms for agnostically learning several fundamental classes of high-dimensional distributions: (1) a single Gaussian, (2) a product distribution on the hypercube, (3) mixtures of two product distributions (under a natural balancedness condition), and (4) mixtures of k Gaussians with identical spherical covariances. All our algorithms achieve error that is independent of the dimension, and in many cases depends nearly linearly on the fraction of adversarially corrupted samples. Moreover, we develop a general recipe for detecting and correcting corruptions in high dimensions that may be applicable to many other problems.
    Funding: United States. Office of Naval Research (Grant N00014-12-1-0999); National Science Foundation (U.S.) (CAREER Award CCF-1453261); National Science Foundation (U.S.) (CAREER Award CCF-0953960); Google (Firm) (Faculty Research Award); National Science Foundation (U.S.) Graduate Research Fellowship Program; NEC Corporation.
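    The "detect and correct corruptions" recipe can be caricatured by a spectral filter: score each point by its squared projection onto the top eigenvector of the empirical covariance (the direction an adversary must inflate to shift the mean), then discard the highest-scoring points before re-estimating. This one-shot, 2-D version with a fixed removal fraction is a drastic simplification of the iterative, data-dependent filtering such algorithms actually use.

```python
import random

def mean_vec(pts):
    n = len(pts)
    return [sum(p[i] for p in pts) / n for i in range(len(pts[0]))]

def top_eigvec(pts, iters=100):
    """Top eigenvector of the empirical covariance via power iteration (2-D)."""
    mu = mean_vec(pts)
    cen = [[p[0] - mu[0], p[1] - mu[1]] for p in pts]
    v = [1.0, 0.0]
    for _ in range(iters):
        # w = (X^T X) v, proportional to the covariance times v
        w0 = sum(c[0] * (c[0] * v[0] + c[1] * v[1]) for c in cen)
        w1 = sum(c[1] * (c[0] * v[0] + c[1] * v[1]) for c in cen)
        norm = (w0 * w0 + w1 * w1) ** 0.5
        v = [w0 / norm, w1 / norm]
    return v, mu

def filtered_mean(pts, eps):
    v, mu = top_eigvec(pts)
    # Outliers that shift the mean show up with large projections here.
    score = lambda p: ((p[0] - mu[0]) * v[0] + (p[1] - mu[1]) * v[1]) ** 2
    kept = sorted(pts, key=score)[: int(len(pts) * (1 - eps))]
    return mean_vec(kept)

random.seed(1)
good = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(950)]
bad  = [[10.0, 10.0]] * 50          # eps = 0.05 adversarial corruption
est = filtered_mean(good + bad, 0.05)
```

    On this data the naive mean is shifted by roughly eps times the outlier magnitude, while the filtered estimate lands near the true mean (0, 0).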

    Retrieval of Precise Radial Velocities from Near-Infrared High Resolution Spectra of Low Mass Stars

    Given that low-mass stars have intrinsically low luminosities at optical wavelengths and a propensity for stellar activity, it is advantageous for radial velocity (RV) surveys of these objects to use near-infrared (NIR) wavelengths. In this work we describe and test a novel RV extraction pipeline dedicated to retrieving RVs from low-mass stars using NIR spectra taken by the CSHELL spectrograph at the NASA Infrared Telescope Facility, where a methane isotopologue gas cell is used for wavelength calibration. The pipeline minimizes the residuals between the observations and a spectral model composed of templates for the target star, the gas cell, and atmospheric telluric absorption; models of the line spread function, continuum curvature, and sinusoidal fringing; and a parameterization of the wavelength solution. The stellar template is derived iteratively from the science observations themselves, without a need for separate observations dedicated to retrieving it. Despite limitations from CSHELL's narrow wavelength range and instrumental systematics, we are able to (1) obtain an RV precision of 35 m/s for the RV standard star GJ 15 A over a time baseline of 817 days, reaching the photon noise limit for our attained SNR, (2) achieve ~3 m/s RV precision for the M giant SV Peg over a baseline of several days and confirm its long-term RV trend due to stellar pulsations, as well as obtain nightly noise floors of ~2-6 m/s, and (3) show that our data are consistent with the known masses, periods, and orbital eccentricities of the two most massive planets orbiting GJ 876. Future applications of our pipeline to RV surveys using the next generation of NIR spectrographs, such as iSHELL, will enable the potential detection of Super-Earths and Mini-Neptunes in the habitable zones of M dwarfs.
    Comment: 64 pages, 28 figures, 5 tables. Accepted for publication in PAS
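    The core forward-modeling step described in the abstract can be sketched as: Doppler-shift a stellar template, imprint the fixed gas-cell spectrum on it, and pick the velocity that minimizes the residuals against the observation. The toy Gaussian-line spectra, the noiseless observation, and the K-band wavelengths below are invented for illustration; a real pipeline would also fit the LSF, continuum, fringing, telluric absorption, and wavelength solution.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def stellar_template(wl_nm):
    """Toy stellar spectrum: flat continuum with one absorption line at 2300 nm."""
    return 1.0 - 0.5 * math.exp(-(((wl_nm - 2300.0) / 0.05) ** 2))

def gas_cell(wl_nm):
    """Toy gas-cell line, fixed in the observer frame (wavelength reference)."""
    return 1.0 - 0.3 * math.exp(-(((wl_nm - 2300.2) / 0.03) ** 2))

def model(wl_nm, rv):
    # Doppler shift: flux observed at wl came from rest wavelength wl / (1 + rv/c);
    # the gas-cell lines are imprinted after the shift, anchoring the scale.
    return stellar_template(wl_nm / (1.0 + rv / C)) * gas_cell(wl_nm)

wavelengths = [2299.5 + 0.001 * i for i in range(1000)]  # nm grid
true_rv = 1500.0                                         # m/s
observed = [model(w, true_rv) for w in wavelengths]

def chi2(rv):
    return sum((model(w, rv) - o) ** 2 for w, o in zip(wavelengths, observed))

# Coarse grid search over velocity; real pipelines use nonlinear least squares.
best_rv = min(range(-5000, 5001, 50), key=lambda rv: chi2(float(rv)))
```

    Because the observation here is noiseless and the true velocity lies on the grid, the search recovers it exactly; with photon noise, the attainable precision degrades toward the photon limit quoted in the abstract.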